The development of social media user stance detection and bot detection methods relies heavily on large-scale, high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest raw data collection in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. For user features, we extract the 20 user property features with the highest information gain, together with user tweet features. In addition, we perform a thorough evaluation of MGTAB and other public datasets. Our experiments find that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and suggest potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
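The feature-selection step the abstract mentions (keeping the property features with the greatest information gain) can be illustrated with a minimal sketch. This is not MGTAB's actual pipeline: the toy feature names and labels below are invented, and information gain is computed as H(Y) − H(Y|X) over discrete features.

```python
# Hedged toy sketch of information-gain feature ranking (not MGTAB's code).
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy H(Y) in bits
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    # IG(Y; X) = H(Y) - sum_v P(X = v) * H(Y | X = v)
    n = len(labels)
    ig = entropy(labels)
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        ig -= (len(subset) / n) * entropy(subset)
    return ig

def top_k_features(columns, labels, k):
    # Rank feature columns by information gain, descending, and keep the top k
    ranked = sorted(columns, key=lambda name: information_gain(columns[name], labels),
                    reverse=True)
    return ranked[:k]

# Invented toy data: "verified" perfectly predicts the label, "has_avatar" is noise
labels = [1, 1, 0, 0, 1, 0]
cols = {"verified": [1, 1, 0, 0, 1, 0], "has_avatar": [1, 0, 1, 0, 1, 0]}
best = top_k_features(cols, labels, k=1)
```

In a real pipeline the same ranking would be run over all candidate property features before keeping the top 20.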
In this paper, we revisit the class of iterative shrinkage-thresholding algorithms (ISTA) for solving the linear inverse problem with sparse representation, which arises in signal and image processing. A numerical experiment on image deblurring shows that the convergence behavior on a logarithmic-scale ordinate tends to be linear rather than logarithmic, i.e., approximately flat. On closer observation, we find that the previous assumption that the smooth part is merely convex weakens the least-squares model. Specifically, assuming the smooth part to be strongly convex is more reasonable for the least-squares model, even though the image matrix is probably ill-conditioned. Furthermore, we tighten the pivotal inequality for composite optimization, first found in [Li et al., 2022], with the smooth part assumed strongly convex instead of generally convex. Based on this pivotal inequality, we generalize the linear convergence to composite optimization, in both the objective value and the squared proximal subgradient norm. Meanwhile, we construct a simple ill-conditioned matrix whose singular values are easy to compute, in place of the original blur matrix. The new numerical experiment shows that the proximal generalization of Nesterov's accelerated gradient descent (NAG) for strongly convex functions has a faster linear convergence rate than ISTA. Based on the tighter pivotal inequality, we also generalize the faster linear convergence rate to composite optimization, in both the objective value and the squared proximal subgradient norm, by taking advantage of a well-constructed Lyapunov function with a slight modification and the phase-space representation based on the high-resolution differential equation framework from the implicit-velocity scheme.
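For reference, the basic ISTA iteration for the LASSO problem min_x (1/2)||Ax − b||² + λ||x||₁ is a gradient step on the smooth least-squares part followed by soft-thresholding, the proximal step for the ℓ₁ part. The following is a minimal pure-Python sketch, not the paper's implementation; the toy diagonal matrix and fixed step size s = 1/L are chosen purely for illustration.

```python
# Hedged toy sketch of ISTA for min_x (1/2)||Ax - b||^2 + lam*||x||_1.

def matvec(A, x):
    # Dense matrix-vector product over plain lists
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each coordinate toward zero by t
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def ista(A, b, lam, step, iters=500):
    x = [0.0] * len(A[0])
    At = list(zip(*A))                       # transpose of A
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]   # residual A x - b
        g = matvec(At, r)                                  # gradient A^T (A x - b)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

# Toy diagonal example: A = diag(1, 2), b = (2, 4); L = ||A^T A|| = 4, so s = 1/L = 0.25
x = ista([[1.0, 0.0], [0.0, 2.0]], [2.0, 4.0], lam=0.1, step=0.25)
```

For this separable toy problem the fixed point can be checked by hand per coordinate, which makes it convenient for sanity-testing the iteration.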
Nesterov's accelerated gradient descent (NAG) is one of the milestones in the history of first-order algorithms. It was not until the high-resolution differential equation framework was proposed in [Shi et al., 2022] that the mechanism behind the acceleration phenomenon was uncovered: it is due to the gradient correction term. To deepen our understanding of how the high-resolution differential equation framework bears on the convergence rate, we continue to investigate NAG for $\mu$-strongly convex functions in this paper, using the techniques of Lyapunov analysis and phase-space representation. First, we revisit the proof from the gradient-correction scheme. Similar to [Chen et al., 2022], a straightforward calculation simplifies the proof considerably and enlarges the step size to $s=1/L$ with minor modification. Meanwhile, the way of constructing Lyapunov functions is principled. Furthermore, we also investigate NAG from the implicit-velocity scheme. Due to the difference in the velocity iterates, we find that the Lyapunov function constructed from the implicit-velocity scheme needs no additional term, and the calculation of the iterative difference becomes simpler. Together with the optimal step size obtained, the high-resolution differential equation framework from the implicit-velocity scheme of NAG works perfectly and outperforms the gradient-correction scheme.
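The NAG scheme for a $\mu$-strongly convex function discussed above can be sketched in its standard textbook form: a gradient step from the extrapolated point, followed by momentum with coefficient $(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$, $\kappa = L/\mu$, and step size $s = 1/L$. This is not the paper's analysis, and the toy quadratic below is illustrative only.

```python
# Hedged sketch of NAG for a mu-strongly convex function (standard scheme).
import math

def nag(grad, x0, L, mu, iters=100):
    kappa = L / mu
    beta = (math.sqrt(kappa) - 1) / (math.sqrt(kappa) + 1)  # momentum coefficient
    s = 1.0 / L                                             # step size
    x_prev = list(x0)
    y = list(x0)
    for _ in range(iters):
        g = grad(y)
        x = [yi - s * gi for yi, gi in zip(y, g)]           # gradient step at y_k
        y = [xi + beta * (xi - xpi) for xi, xpi in zip(x, x_prev)]  # momentum step
        x_prev = x
    return x_prev

# Toy quadratic f(x) = 0.5*(4*x0^2 + x1^2): L = 4, mu = 1, minimizer at the origin
x = nag(lambda v: [4.0 * v[0], v[1]], [1.0, 1.0], L=4.0, mu=1.0)
```

The geometric rate $(1 - 1/\sqrt{\kappa})$ of this scheme is exactly the accelerated rate whose mechanism the high-resolution framework explains.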
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset collected with the ZED stereo camera, which is capable of generating depth maps for objects located up to 50 meters away. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; their detailed descriptions are provided in this paper.
Oriented normals are a common prerequisite for many geometric algorithms based on point clouds, such as Poisson surface reconstruction. However, it is not trivial to obtain a consistent orientation. In this work, we bridge orientation and reconstruction in implicit space and propose a novel approach to orient point cloud normals by incorporating isovalue constraints into the Poisson equation. Our key observation is that when a point cloud with consistently oriented normals is used as the input for implicit surface reconstruction, the indicator function values of the sample points should be close to the isovalue of the surface. Based on this observation and the Poisson equation, we propose an optimization formulation that combines isovalue constraints with local consistency requirements for normals. We optimize normals and implicit functions simultaneously and solve for a globally consistent orientation. Thanks to the sparsity of the linear system, our method can run on an average laptop in reasonable time. Experiments show that our method achieves high performance on non-uniform and noisy data and can handle varying sampling densities, artifacts, multiple connected components, and nested surfaces.
Recent advances in compressing high-accuracy convolutional neural networks (CNNs) have brought remarkable progress in real-time object detection. To accelerate detection speed, lightweight detectors usually use a single-path backbone with few convolutional layers. However, single-path architectures involve continuous pooling and downsampling operations, which tend to produce coarse and inaccurate feature maps that are unfavorable for locating objects. On the other hand, due to limited network capacity, recent lightweight networks are often weak at representing large-scale visual data. To address these problems, this paper presents a dual-path network, named DPNet, with a lightweight attention scheme for real-time object detection. The dual-path architecture enables us to extract high-level semantic features and low-level object details in parallel. Although DPNet has a nearly duplicated shape relative to single-path detectors, its computational cost and model size do not increase significantly. To enhance representation capability, a lightweight self-correlation module (LSCM) is designed to capture global interactions with only a small computational overhead and few network parameters. In the neck, LSCM is extended to a lightweight cross-correlation module (LCCM), capturing mutual dependencies between adjacent-scale features. We have conducted exhaustive experiments on the MS COCO and Pascal VOC 2007 datasets. The experimental results demonstrate that DPNet achieves a state-of-the-art trade-off between detection accuracy and implementation efficiency. Specifically, DPNet achieves 30.5% AP on MS COCO test-dev and 81.5% mAP on the Pascal VOC 2007 test set, together with nearly 2.5M model size, 1.04 GFLOPs, and 164 FPS and 196 FPS for 320 x 320 input images.
In the history of first-order algorithms, Nesterov's accelerated gradient descent (NAG) is one of the milestones. However, the cause of the acceleration has long been a mystery. It was not until the high-resolution differential equation framework was proposed in [Shi et al., 2021] that the existence of the gradient correction was revealed. In this paper, we continue to investigate the acceleration phenomenon. First, we provide a significantly simplified proof based on a precise observation and an inequality for $L$-smooth functions. Then, a new implicit-velocity high-resolution differential equation framework, together with the corresponding implicit-velocity phase-space representation and Lyapunov function, is proposed to investigate the convergence behavior of the iterative sequence $\{x_k\}_{k=0}^{\infty}$ of NAG. Furthermore, from the two kinds of phase-space representations, we find that the role played by the gradient correction is equivalent to that of the velocity implicitly included in the gradient, where the only difference comes from the iterative sequence $\{y_k\}_{k=0}^{\infty}$ being replaced by $\{x_k\}_{k=0}^{\infty}$. Finally, for the open question of whether gradient norm minimization of NAG has a faster rate $o(1/k^3)$, we provide a positive answer with a proof. Meanwhile, a faster rate of objective value minimization, $o(1/k^2)$, is shown for $r > 2$.
Human cognition is compositional. We understand a scene by decomposing it into different concepts (e.g., the shape and position of an object) and learning the respective laws of these concepts, which may be either natural (e.g., laws of motion) or man-made (e.g., laws of a game). The automatic parsing of these laws indicates a model's ability to understand the scene, which makes law parsing play a central role in many visual tasks. In this paper, we propose a deep latent variable model for Compositional Law Parsing (CLAP). CLAP achieves human-like compositional ability through an encoding-decoding architecture that represents the concepts in a scene as latent variables, and further employs concept-specific random functions, instantiated in the latent space, to capture the law of each concept. Our experimental results demonstrate that CLAP outperforms the compared baseline methods in multiple visual tasks, including intuitive physics, abstract visual reasoning, and scene representation. In addition, CLAP can learn concept-specific laws in a scene without supervision, and the laws can be edited by modifying the corresponding latent random functions, validating its interpretability and manipulability.
Modeling how network-level traffic flow changes in an urban environment is useful for decision-making in transportation, public safety, and urban planning. A traffic flow system can be viewed as a dynamic process that transitions between states (e.g., the traffic volume on each road segment) over time. In a real-world traffic system, traffic operation actions such as traffic signal control or reversible lane changes affect the system, whose state is influenced by both historical states and traffic operation actions. In this paper, we consider the problem of modeling network-level traffic flow in a real-world setting, where the available data is sparse (i.e., only part of the traffic system is observed). We propose DTIGNN, an approach that predicts network-level traffic flow from sparse data. DTIGNN models the traffic system as a dynamic graph influenced by traffic signals, learns a transition model grounded in the fundamental transition equations of transportation, and predicts future traffic states with imputation in the process. Through comprehensive experiments, we demonstrate that our method outperforms state-of-the-art methods and can better support decision-making in transportation.
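A toy illustration of the kind of transition dynamics the abstract refers to: vehicle volumes on road segments evolve by conservation, with outflow gated by the signal phase and delivered to the downstream segment. This is an invented minimal sketch, not DTIGNN itself; all names, the fixed outflow rate, and the topology are hypothetical.

```python
# Hedged toy sketch of a signal-gated, conservation-style traffic transition
# (illustrative only; not DTIGNN's learned transition model).

def step(volumes, downstream, green, out_rate=0.3):
    # Vehicles leave a segment only when its signal is green...
    outflow = [v * out_rate if g else 0.0 for v, g in zip(volumes, green)]
    nxt = [v - o for v, o in zip(volumes, outflow)]
    # ...and arrive at the downstream segment in the next state (conservation).
    for i, o in enumerate(outflow):
        j = downstream[i]
        if j is not None:
            nxt[j] += o
    return nxt

# Three segments in a line (0 -> 1 -> 2); segment 1's signal is red
vols = step([10.0, 5.0, 0.0], downstream=[1, 2, None], green=[True, False, True])
```

A learned model would replace the fixed `out_rate` and topology with parameters inferred from (partially observed) data, which is where the imputation in the abstract comes in.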
The application of artificial intelligence to scientific problems (namely AI for science) is currently under debate. However, scientific problems differ greatly from traditional ones with images, text, and so on, and new challenges emerge due to imbalanced scientific data and the complicated effects of physical setups. In this work, we demonstrate the validity of a deep convolutional neural network (CNN) in reconstructing the lattice topology (i.e., the spin connectivities) in the presence of strong thermal fluctuations and unbalanced data. Taking the kinetic Ising model with Glauber dynamics as an example, the CNN maps the time-dependent local magnetic momenta (a single-node feature), evolved from a specific initial configuration (dubbed an evolution instance), to the probabilities of the presence of possible couplings. Our scheme distinguishes itself from previous ones that may require knowledge of the node dynamics, responses to perturbations, or the evaluation of statistical quantities such as correlations or transfer entropy from many evolution instances. Fine-tuning avoids the "barren plateau" caused by strong thermal fluctuations at high temperatures. Accurate reconstruction can be made even when the thermal fluctuations dominate over the correlations, where statistical methods in general fail. Meanwhile, we reveal the generalization of the CNN to handle instances evolved from unlearned initial spin configurations and instances with unlearned lattices. We raise an open question on learning with unbalanced data in an almost "double-exponentially" large sample space.
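Glauber dynamics, the kinetic model named above, can be sketched on a small Ising ring: at each step one spin is chosen at random and flipped with the heat-bath probability determined by its local field. This pure-Python toy (nearest-neighbour couplings J = 1, periodic boundary) only illustrates the dynamics that generate evolution instances; it is not the paper's setup.

```python
# Hedged toy sketch of single-spin-flip Glauber (heat-bath) dynamics on an Ising ring.
import math
import random

def glauber_step(spins, beta, rng):
    n = len(spins)
    i = rng.randrange(n)
    # Local field from the two nearest neighbours (J = 1, periodic boundary)
    h = spins[(i - 1) % n] + spins[(i + 1) % n]
    # Heat-bath flip probability: p = 1 / (1 + exp(2*beta*s_i*h_i))
    p_flip = 1.0 / (1.0 + math.exp(2.0 * beta * spins[i] * h))
    if rng.random() < p_flip:
        spins[i] = -spins[i]
    return spins

rng = random.Random(0)
spins = [1, -1, 1, 1, -1, 1, -1, -1]
for _ in range(1000):
    glauber_step(spins, beta=0.5, rng=rng)
```

Recording the local magnetic momenta of such trajectories over time would give the kind of single-node time series the CNN takes as input.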